Measures and Distributions

Author

  • Jose-Luis Menaldi
Abstract

Measures

A (set) function μ : K → [0, ∞], with ∅ ∈ K ⊂ 2^Ω and μ(∅) = 0, is called additive (or finitely additive) if A, B ∈ K with A ∪ B ∈ K and A ∩ B = ∅ imply μ(A ∪ B) = μ(A) + μ(B). Similarly, μ is called σ-additive (or countably additive) if A_i ∈ K with ⋃_{i=1}^∞ A_i = A ∈ K and A_i ∩ A_j = ∅ for i ≠ j imply μ(A) = ∑_{i=1}^∞ μ(A_i). It is clear that if μ is σ-additive then μ is also additive, but the converse is false; e.g., take K = 2^Ω with Ω an infinite set, and set μ(A) = 0 if A is finite and μ(A) = ∞ otherwise. Certainly, if K is a (σ-)ring then (σ-)additivity takes the neat form: A, B ∈ K implies μ(A + B) = μ(A) + μ(B), or A_i ∈ K implies μ(∑_{i=1}^∞ A_i) = ∑_{i=1}^∞ μ(A_i), plus the implicit condition μ(∅) = 0. Usually, the above properties are referred to as μ being (σ-)additive on K.

Definition 2.1. A set function μ : A → [0, ∞] is called a measure if μ is σ-additive and A is a σ-algebra. If μ also satisfies μ(Ω) = 1 then μ is called a probability measure, or in short a probability. Thus a triple (Ω, A, μ) is called a measure space if μ is a measure on the measurable space (Ω, A). Similarly, (Ω, F, P) is called a probability space when P is a probability on the measurable space (Ω, F). Sometimes we use the name additive measure (or additive probability) to say that μ is finitely additive on an algebra A.

If μ is an additive measure and A, B ∈ A with A ⊂ B, then by writing A ∪ B = A + (B ∖ A) we deduce μ(A) ≤ μ(B) (monotonicity) and μ(B ∖ A) = μ(B) − μ(A) if μ(A) < ∞. Moreover, if A_i ∈ A for i = 1, …, n, then μ(⋃_{i=1}^n A_i) ≤ ∑_{i=1}^n μ(A_i) (subadditivity); and similarly with sub-σ-additivity if μ is a measure. Note that occasionally we have to study measures defined on σ-rings instead of σ-algebras, e.g., see Halmos [14].

Perhaps the simplest example is the Dirac measure, as follows: fix an element x₀ in Ω and define δ : 2^Ω → [0, 1] by δ(A) = 1_A(x₀), i.e., δ(A) equals 1 if x₀ ∈ A and equals 0 otherwise.
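As a concrete illustration (not part of the text), the Dirac measure can be sketched on a small finite Ω, where the power set 2^Ω is enumerable and finite additivity δ(A ∪ B) = δ(A) + δ(B) for disjoint A, B can be checked exhaustively; the helper names below (`dirac`, `subsets`) are invented for this sketch.

```python
# Sketch of the Dirac measure delta(A) = 1_A(x0) on a finite Omega,
# with an exhaustive check of finite additivity on disjoint pairs.
from itertools import chain, combinations

def dirac(x0):
    """Return the Dirac measure concentrated at x0: A -> 1 if x0 in A, else 0."""
    return lambda A: 1 if x0 in A else 0

def subsets(omega):
    """All subsets of a finite set omega, i.e. its power set 2^Omega."""
    s = list(omega)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

omega = {1, 2, 3}
delta = dirac(2)

# Finite additivity: delta(A | B) == delta(A) + delta(B) whenever A & B is empty.
for A in subsets(omega):
    for B in subsets(omega):
        if not (A & B):
            assert delta(A | B) == delta(A) + delta(B)

print(delta({2, 3}), delta({1, 3}))  # 1 0
```

The exhaustive loop works only because Ω is finite; for an infinite Ω, σ-additivity of δ follows from the fact that x₀ lies in at most one set of any disjoint family.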
This gives rise to the discrete measures, after using the fact that μ(A) = ∑_{i=1}^∞ a_i m_i(A) is a measure whenever each m_i is a measure and the a_i are nonnegative real numbers.
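The weighted-sum construction above can be sketched with Dirac measures as the m_i, giving μ(A) = ∑_i a_i 1_A(x_i); the function name `discrete_measure` and the sample masses are assumptions of this sketch, not from the text.

```python
# Sketch of a discrete measure: a nonnegative weighted sum of Dirac measures,
# mu(A) = sum_i a_i * 1_A(x_i), represented by a dict of point masses.
def discrete_measure(weights):
    """weights: dict mapping points x_i to nonnegative masses a_i."""
    def mu(A):
        return sum(a for x, a in weights.items() if x in A)
    return mu

# Example masses summing to 1, so mu is a probability: mu(Omega) = 1.
mu = discrete_measure({'a': 0.5, 'b': 0.3, 'c': 0.2})

assert mu(set()) == 0                                # mu(empty set) = 0
assert abs(mu({'a', 'b', 'c'}) - 1.0) < 1e-12        # mu(Omega) = 1
# Additivity on the disjoint sets {'a'} and {'b', 'c'}:
assert abs(mu({'a'}) + mu({'b', 'c'}) - mu({'a', 'b', 'c'})) < 1e-12
print(mu({'a', 'b'}))  # 0.8
```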



Publication date: 2007